Information-theoretic Limits for Community Detection in Network Models
We analyze the information-theoretic limits for the recovery of node labels in several network models, including the Stochastic Block Model, the Exponential Random Graph Model, the Latent Space Model, the Directed Preferential Attachment Model, and the Directed Small-world Model. For the Stochastic Block Model, the non-recoverability condition depends on the probabilities of an edge within a community and between different communities. For the Latent Space Model, it depends on the dimension of the latent space and on how far apart and how dispersed the communities are in that space. For the Directed Preferential Attachment Model and the Directed Small-world Model, it depends on the ratio between homophily and neighborhood size. We also consider dynamic versions of the Stochastic Block Model and the Latent Space Model.
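To make the Stochastic Block Model setting concrete, here is a minimal sampler for the standard two-parameter SBM the abstract refers to: an edge appears with probability `p_in` when two nodes share a community and `p_out` otherwise. This is an illustrative sketch of the generative model, not the paper's analysis; the function name and parameters are my own.

```python
import numpy as np

def sample_sbm(labels, p_in, p_out, rng=None):
    """Sample an undirected SBM adjacency matrix.

    Edge (i, j) exists with probability p_in if labels[i] == labels[j],
    and p_out otherwise. No self-loops; the matrix is symmetric.
    """
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    n = len(labels)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # sample upper triangle
    return (upper | upper.T).astype(int)              # symmetrize

labels = np.repeat([0, 1], 50)  # two communities of 50 nodes each
A = sample_sbm(labels, p_in=0.3, p_out=0.05, rng=0)
```

Recoverability results for the SBM are typically phrased in terms of how far `p_in` and `p_out` are from each other: as they approach one another, the community structure becomes statistically invisible.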
Transfer learning under latent space model
Fang, Kuangnan, Qin, Ruixuan, Fan, Xinyan
The latent space model plays a crucial role in network analysis, and accurate estimation of latent variables is essential for downstream tasks such as link prediction. However, the large number of parameters to be estimated presents a challenge, especially when the latent space dimension is not exceptionally small. In this paper, we propose a transfer learning method that leverages information from networks with latent variables similar to those in the target network, thereby improving the estimation accuracy for the target. Given transferable source networks, we introduce a two-stage transfer learning algorithm that accommodates differences in node numbers between source and target networks. In each stage, we derive sufficient identification conditions and design tailored projected gradient descent algorithms for estimation. Theoretical properties of the resulting estimators are established. When the transferable networks are unknown, a detection algorithm is introduced to identify suitable source networks. Simulation studies and analyses of two real datasets demonstrate the effectiveness of the proposed methods.
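For context, the classical distance-based latent space model (in the spirit of Hoff, Raftery, and Handcock) places each node at a position in R^d and connects nearby nodes with higher probability. The sketch below computes edge probabilities under that generic model; it is background illustration only, with my own function name, and not the estimation procedure proposed in the paper.

```python
import numpy as np

def link_probabilities(Z, alpha=1.0):
    """Edge probabilities under a distance-based latent space model:
    P(A_ij = 1) = sigmoid(alpha - ||z_i - z_j||),
    where Z is an (n, d) matrix of latent positions."""
    diff = Z[:, None, :] - Z[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # pairwise distances
    return 1.0 / (1.0 + np.exp(-(alpha - dist)))  # sigmoid link

# Nodes 0 and 1 are close in latent space, node 2 is far away.
Z = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 0.0]])
P = link_probabilities(Z, alpha=1.0)
```

The `(n, d)` position matrix `Z` is exactly the object whose estimation becomes expensive as `d` grows, which motivates borrowing strength from source networks with similar latent positions.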
GRAND: Graph Release with Assured Node Differential Privacy
Liu, Suqing, Bi, Xuan, Li, Tianxi
Differential privacy is a well-established framework for safeguarding sensitive information in data. While extensively applied across various domains, its application to network data -- particularly at the node level -- remains underexplored. Existing methods for node-level privacy either focus exclusively on query-based approaches, which restrict output to pre-specified network statistics, or fail to preserve key structural properties of the network. In this work, we propose GRAND (Graph Release with Assured Node Differential privacy), which is, to the best of our knowledge, the first network release mechanism that releases entire networks while ensuring node-level differential privacy and preserving structural properties. Under a broad class of latent space models, we show that the released network asymptotically follows the same distribution as the original network. The effectiveness of the approach is evaluated through extensive experiments on both synthetic and real-world datasets.
A Transfer Learning Framework for Multilayer Networks via Model Averaging
Link prediction in multilayer networks is a key challenge in applications such as recommendation systems and protein-protein interaction prediction. While many techniques have been developed, most rely on assumptions about shared structures and require access to raw auxiliary data, limiting their practicality. To address these issues, we propose a novel transfer learning framework for multilayer networks using a bi-level model averaging method. A $K$-fold cross-validation criterion based on edges is used to automatically weight inter-layer and intra-layer candidate models. This enables the transfer of information from auxiliary layers while mitigating model uncertainty, even without prior knowledge of shared structures. Theoretically, we prove the optimality and weight convergence of our method under mild conditions. Computationally, our framework is efficient and privacy-preserving, as it avoids raw data sharing and supports parallel processing across multiple servers. Simulations show our method outperforms others in predictive accuracy and robustness. We further demonstrate its practical value through two real-world recommendation system applications.
Latent Guided Sampling for Combinatorial Optimization
Surendran, Sobihan, Fermanian, Adeline, Corff, Sylvain Le
Combinatorial Optimization (CO) consists of finding the best solution from a discrete set of possibilities by optimizing a given objective function subject to constraints. It has widespread applications across various domains, including vehicle routing (Veres and Moussa, 2019), production planning (Dolgui et al., 2019), and drug discovery (Liu et al., 2017). However, its NP-hard nature and the complexity of many problem variants make solving CO problems highly challenging. Traditional heuristic methods (e.g., (Kirkpatrick et al., 1983; Glover, 1989; Mladenović and Hansen, 1997)) rely on hand-crafted rules to guide the search, providing near-optimal solutions with significantly lower computational costs. Inspired by the success of deep learning in computer vision (Krizhevsky et al., 2012; He et al., 2016) and natural language processing (Vaswani et al., 2017; Devlin, 2018), recent years have seen a surge in learning-based Neural Combinatorial Optimization (NCO) approaches for solving CO problems, including the Travelling Salesman Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP).
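As a concrete example of the hand-crafted heuristics mentioned above, the nearest-neighbor rule for the TSP builds a tour by repeatedly visiting the closest unvisited city. It is fast and simple but generally suboptimal, which is precisely the gap NCO methods aim to close. This is a textbook illustration, not the sampling method proposed in the paper.

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy nearest-neighbor TSP heuristic: start at city 0 and
    repeatedly move to the closest unvisited city. Returns the visit
    order as a list of indices (fast, but not optimal in general)."""
    n = len(cities)
    unvisited = set(range(1, n))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (2, 0), (1, 0.2), (3, 1)]
tour = nearest_neighbor_tour(cities)  # greedy visit order from city 0
```

From city 0 the heuristic first hops to the nearby city at (1, 0.2) rather than the farther ones, illustrating how a simple local rule drives the whole search.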